Multivariate KF Examples
Example 1 - rocket altitude estimation
In this example, we will estimate the altitude of a rocket, that is, we will design a two-dimensional Kalman Filter with a control input. We assume constant acceleration dynamics.
The rocket is equipped with an on-board altimeter that provides altitude measurements. In addition, the rocket is equipped with an accelerometer that measures the rocket’s acceleration, this serves as a control input to the Kalman Filter.
Accelerometers don’t sense gravity, so we must subtract the gravitational acceleration constant \(g\) from each accelerometer measurement. The accelerometer measurement at time step \(n\) is: \(a_{n} = \ddot{x} - g + \epsilon\), where \(\ddot{x}\) is the actual acceleration of the object (the second derivative of the object’s position), \(g\) is the gravitational acceleration constant (\(g = -9.8\frac{m}{s^2}\)), and \(\epsilon\) is the accelerometer measurement error.
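As a quick sanity check of this model (a minimal sketch; the reading 39.81 is taken from the accelerometer measurement set used later, and the noise term \(\epsilon\) is ignored), we can recover the actual acceleration from a raw accelerometer reading:

```r
# Accelerometer model: a_n = x_ddot - g + eps, with g = -9.8 m/s^2
g <- -9.8
a_n <- 39.81          # a raw accelerometer reading (m/s^2)
x_ddot <- a_n + g     # recover the actual acceleration (noise ignored)
x_ddot                # close to the true 30 m/s^2
```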
Kalman Filter equations
For the state extrapolation equation: \(\begin{array}{ccc} \begin{bmatrix} \hat{x}_{n+1,n} \\ \hat{\dot{x}}_{n+1,n} \end{bmatrix} = & \begin{bmatrix} 1 & \Delta t\\ 0 & 1 \end{bmatrix} & \begin{bmatrix} \hat{x}_{n,n} \\ \hat{\dot{x}}_{n,n} \end{bmatrix} + & \begin{bmatrix} 0.5\Delta t^2\\ \Delta t \end{bmatrix} & (a_{n} + g) \end{array}\)
For the covariance extrapolation equation, the estimate covariance matrix is: \[ P = \begin{bmatrix} p_{x} & p_{x\dot{x}} \\ p_{\dot{x}x} & p_{\dot{x}} \\ \end{bmatrix} \]
The process noise matrix, assuming a discrete noise model: \[\begin{align*} Q = & \begin{bmatrix} \sigma^2_{x} & \sigma^2_{x\dot{x}}\\ \sigma^2_{\dot{x}x} & \sigma^2_{\dot{x}}\\ \end{bmatrix} \\ &= \epsilon^2 \begin{bmatrix} \frac{\Delta t^4}{4} & \frac{\Delta t^3}{2} \\ \frac{\Delta t^3}{2} & \Delta t^2 \\ \end{bmatrix} \end{align*}\]
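One way to see where this form of \(Q\) comes from: the scalar acceleration noise is projected through the control matrix \(G = \begin{bmatrix} 0.5\Delta t^2 \\ \Delta t \end{bmatrix}\), so
\[ Q = G \epsilon^2 G^T = \epsilon^2 \begin{bmatrix} \frac{\Delta t^2}{2} \\ \Delta t \end{bmatrix} \begin{bmatrix} \frac{\Delta t^2}{2} & \Delta t \end{bmatrix} = \epsilon^2 \begin{bmatrix} \frac{\Delta t^4}{4} & \frac{\Delta t^3}{2} \\ \frac{\Delta t^3}{2} & \Delta t^2 \end{bmatrix} \]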
The accelerometer error \(\epsilon\) is much lower than the system’s random acceleration; therefore, we use \(\epsilon^2\) as the multiplier of the process noise matrix. This makes our estimation uncertainty much lower.
In this example, the acceleration is handled by the control input (therefore, it is not part of the \(F\) matrix, and the \(Q\) matrix is \(2 \times 2\)). Without an external control input, the acceleration would be part of the state vector, and the matrices would be \(3 \times 3\).
The measurement equation:
The measurement provides only the altitude of the rocket: \[ z_{n} = \begin{bmatrix} x_{n, measured} \end{bmatrix} \]
The dimension of \(z_{n}\) is \(1 × 1\) and the dimension of \(x_{n}\) is \(2 × 1\), so the dimension of the observation matrix \(\textbf{H}\) is \(1 × 2\).
\[ \textbf{H} = \begin{bmatrix} 1 & 0 \end{bmatrix} \]
The measurement uncertainty: \[ \textbf{R}_{n} = \begin{bmatrix} \sigma^2_{x_{m}} \end{bmatrix} \]
The subscript \(m\) means the “measurement”, but for this example we assume a constant measurement uncertainty: \[\textbf{R}_{1} = ... = \textbf{R}_{n}=\textbf{R}\]
So we have defined all the necessary blocks for the Kalman Gain, the state update equation, and the covariance update equation.
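For reference, these blocks are the standard Kalman Filter equations (the covariance update shown is the Joseph stabilized form, which is what the code in this example computes):
\[ K_n = P_{n,n-1} H^T (H P_{n,n-1} H^T + R)^{-1} \]
\[ \hat{x}_{n,n} = \hat{x}_{n,n-1} + K_n (z_n - H \hat{x}_{n,n-1}) \]
\[ P_{n,n} = (I - K_n H) P_{n,n-1} (I - K_n H)^T + K_n R K_n^T \]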
Numerical example
Let us assume a vertically boosting rocket with constant acceleration. The rocket is equipped with an altimeter that provides altitude measurements and an accelerometer that serves as a control input.
# The measurements period: ∆t = 0.25s
# The rocket acceleration: ẍ = 30 m/s^2
# The altimeter measurement error standard deviation: σ_xm = 20m
# The accelerometer measurement error standard deviation: ϵ = 0.1 m/s^2
# The state transition matrix F would be:
transF <- rbind(c(1, 0.25),
                c(0, 1))
# The control matrix G would be:
G <- c(0.5*0.25**2, 0.25)
# The process noise matrix Q would be:
Q <- 0.1**2 * rbind(c(0.25**4/4, 0.25**3/2),
                    c(0.25**3/2, 0.25**2))
# The measurement uncertainty R would be:
R <- c(20**2)
# The observation matrix H
H <- cbind(1, 0)
# The set of 30 noisy measurements of the altitude hn and acceleration an:
hn <- c(6.43, 1.3, 39.43, 45.89, 41.44, 48.7, 78.06, 80.08, 61.77, 75.15, 110.39, 127.83, 158.75, 156.55, 213.32, 229.82, 262.8, 297.57, 335.69, 367.92, 377.19, 411.18, 460.7, 468.39, 553.9, 583.97, 655.15, 723.09, 736.85, 787.22)
an <- c(39.81, 39.67, 39.81, 39.84, 40.05, 39.85, 39.78, 39.65, 39.67, 39.78, 39.59, 39.87, 39.85, 39.59, 39.84, 39.9, 39.63, 39.59, 39.76, 39.79, 39.73, 39.93, 39.83, 39.85, 39.94, 39.86, 39.76, 39.86, 39.74, 39.94)

Initialization
# We don’t know the rocket’s location; we set the initial position and velocity to 0
x00 <- c(0,0)
# We assume no initial control input
u0 <- 0
# Since our initial state vector is a guess, we set a very high estimate uncertainty
p00 <- rbind(c(500, 0), c(0, 500))

Prediction
# Predict the state for the next step
x10 <- transF %*% x00 + G * u0; x10
## [,1]
## [1,] 0
## [2,] 0
# Extrapolate the estimate uncertainty
p10 <- transF %*% p00 %*% t(transF) + Q; p10
## [,1] [,2]
## [1,] 531.2500 125.0001
## [2,] 125.0001 500.0006
First iteration
# Update
# The Kalman Gain calculation:
k1 <- p10 %*% t(H) %*% solve(H %*% p10 %*% t(H) + R); k1
## [,1]
## [1,] 0.5704698
## [2,] 0.1342283
# Estimate the current state with the first altitude measurement
x11 <- x10 + k1 %*% (hn[1] - H %*% x10); x11
## [,1]
## [1,] 3.6681208
## [2,] 0.8630878
# Update the current estimate uncertainty
p11 <- (diag(dim(k1)[1]) - k1 %*% H) %*% p10 %*% t(diag(dim(k1)[1]) - k1 %*% H) + k1 %*% R %*% t(k1); p11
## [,1] [,2]
## [1,] 228.18792 53.69131
## [2,] 53.69131 483.22208
# Predict, the prediction uncertainty is still very high
x21 <- transF %*% x11 + G * (an[1] - 9.8); x21
## [,1]
## [1,] 4.821705
## [2,] 8.365588
p21 <- transF %*% p11 %*% t(transF) + Q; p21
## [,1] [,2]
## [1,] 285.2350 174.4969
## [2,] 174.4969 483.2227
Generalization
multiKF2 <- function(transMat, processNoiseMat, H, R, G, iniX, iniP, hn, an){
  # Initialization
  x <- list(iniX)
  u <- list(0)
  p <- list(iniP)
  k <- list(0)
  # Prediction
  x <- append(x, list(transMat %*% x[[1]] + G * u[[1]]))
  p <- append(p, list(transMat %*% p[[1]] %*% t(transMat) + processNoiseMat))
  for (i in 2:length(hn)){
    # Update
    k <- append(k, list(p[[length(p)]] %*% t(H) %*% solve(H %*% p[[length(p)]] %*% t(H) + R)))
    # Estimate the current state
    x <- append(x, list(x[[length(x)]] + k[[i]] %*% (hn[i-1] - H %*% x[[length(x)]])))
    # Update the current estimate uncertainty
    p <- append(p, list((diag(dim(k[[i]])[1]) - k[[i]] %*% H) %*% p[[length(p)]] %*% t(diag(dim(k[[i]])[1]) - k[[i]] %*% H) + k[[i]] %*% R %*% t(k[[i]])))
    # Predict
    x <- append(x, list(transMat %*% x[[length(x)]] + G * (an[i-1] - 9.8)))
    p <- append(p, list(transMat %*% p[[length(p)]] %*% t(transMat) + processNoiseMat))
  }
  return(list(x, p, k))
}
result <- multiKF2(transF, Q, H, R, G, x00, p00, hn, an)
x <- result[[1]]
p <- result[[2]]
k <- result[[3]]

Visualization
Rocket Altitude
The true values of a vertically boosting rocket with constant acceleration \(30 m/s^2\) are calculated by \(x=ut+\frac{1}{2}at^2\); assuming the rocket starts from rest, \(u = 0\):
# True values for the first 7.25 seconds for each 0.25s
acceleration <- 30
trueValposX <- 0.5 * acceleration * seq(0, 7.25, by = 0.25)^2
# Take the estimates in the x axis
estimatesX <- c()
for (i in seq(3, 60, by = 2)){
  estimatesX = append(estimatesX, list(x[[i]][1]))
}
estimatesX = append(estimatesX, list(797.07))
# Take the estimate uncertainties (variances) in the x axis
estimSigmaX <- c()
for (i in seq(3, 60, by = 2)){
  estimSigmaX = append(estimSigmaX, list(p[[i]][1,1]))
}
estimSigmaX = append(estimSigmaX, list(49.3))
# Dataframe with all the data
data <- data.frame(
  time = c(1:30),
  estimX = unlist(estimatesX),
  measurements.x = hn,
  # the 95% confidence interval is given by [-1.96 sqrt(V); +1.96 sqrt(V)]
  interMax = unlist(estimatesX) + 1.96 * sqrt(unlist(estimSigmaX)),
  interMin = unlist(estimatesX) - 1.96 * sqrt(unlist(estimSigmaX)),
  trueValX = trueValposX
)
# Plot to compare the true value, measured values and estimates
plot1 <- ggplot(data, aes(x = time)) +
  geom_path(aes(y = trueValX, color = "True values")) +
  geom_point(aes(y = trueValX, color = "True values", shape = "True values")) +
  geom_path(aes(y = measurements.x, color = "Measurements")) +
  geom_point(aes(y = measurements.x, color = "Measurements", shape = "Measurements")) +
  geom_path(aes(y = estimX, color = "Estimates")) +
  geom_point(aes(y = estimX, color = "Estimates", shape = "Estimates")) +
  geom_line(aes(y = interMax, color = "95% confidence interval"), linetype = 1) +
  geom_line(aes(y = interMin, color = "95% confidence interval"), linetype = 1) +
  geom_ribbon(aes(ymin = interMin, ymax = interMax), fill = "yellow", alpha = 0.2) +
  labs(title = "Rocket Altitude",
       x = "Time(s)",
       y = "Altitude(m)",
       color = "") +
  scale_color_manual(values = c("Estimates" = "red", "Measurements" = "blue", "True values" = "green", "95% confidence interval" = "yellow"), name = "") +
  scale_shape_manual(values = c("Estimates" = 19, "Measurements" = 15, "True values" = 18, "95% confidence interval" = 1), name = "")
ggplotly(plot1) %>% config(displayModeBar = FALSE)

We can see a good KF tracking performance and convergence.
Rocket Velocity
The true values of a vertically boosting rocket with constant acceleration \(30 m/s^2\) are calculated by \(v=u+at\); assuming the rocket starts from rest, \(u = 0\):
# True values for the first 7.25 seconds for each 0.25s
acceleration <- 30
trueValposV <- acceleration * seq(0, 7.25, by = 0.25)
# Take the velocity estimates
estimatesV <- c()
for (i in seq(3, 60, by = 2)){
  estimatesV = append(estimatesV, list(x[[i]][2]))
}
estimatesV = append(estimatesV, list(215.7))
# Take the estimate uncertainties (variances)
estimSigmaV <- c()
for (i in seq(3, 60, by = 2)){
  estimSigmaV = append(estimSigmaV, list(p[[i]][2,2]))
}
estimSigmaV = append(estimSigmaV, list(2.6))
# Dataframe with all the data
dataV <- data.frame(
  time = c(1:30),
  estimX = unlist(estimatesV),
  measurements.x = an,
  # the 95% confidence interval is given by [-1.96 sqrt(V); +1.96 sqrt(V)]
  interMax = unlist(estimatesV) + 1.96 * sqrt(unlist(estimSigmaV)),
  interMin = unlist(estimatesV) - 1.96 * sqrt(unlist(estimSigmaV)),
  trueValX = trueValposV
)
# Plot to compare the true value, measured values and estimates
plot2 <- ggplot(dataV, aes(x = time)) +
  geom_path(aes(y = trueValX, color = "True values")) +
  geom_point(aes(y = trueValX, color = "True values", shape = "True values")) +
  geom_path(aes(y = estimX, color = "Estimates")) +
  geom_point(aes(y = estimX, color = "Estimates", shape = "Estimates")) +
  geom_line(aes(y = interMax, color = "95% confidence interval"), linetype = 1) +
  geom_line(aes(y = interMin, color = "95% confidence interval"), linetype = 1) +
  geom_ribbon(aes(ymin = interMin, ymax = interMax), fill = "yellow", alpha = 0.2) +
  labs(title = "Rocket Velocity",
       x = "Time(s)",
       y = "Velocity(m/s)",
       color = "") +
  scale_color_manual(values = c("Estimates" = "red", "Measurements" = "blue", "True values" = "green", "95% confidence interval" = "yellow"), name = "") +
  scale_shape_manual(values = c("Estimates" = 19, "Measurements" = 15, "True values" = 18, "95% confidence interval" = 1), name = "")
ggplotly(plot2) %>% config(displayModeBar = FALSE)

It takes a few seconds for the estimates to converge to the true rocket velocity. In the beginning, the estimated velocity is strongly influenced by the noisy measurements and is not well aligned with the true velocity. But as the KF converges, the noisy measurements have less influence, and the estimates align well with the true values.
Example 2 - vehicle location estimation
We design a six-dimensional Kalman Filter without a control input to estimate a vehicle’s location on the XY plane. The vehicle has an on-board location sensor that reports the X and Y coordinates of the system. We assume constant acceleration dynamics.
Kalman Filter equations
There is no control input so: \(\textbf{u} = 0\)
The state extrapolation equation: \[\hat{\textbf{x}}_{n+1, n} = \textbf{F} \hat{\textbf{x}}_{n, n}\]
The system state \(x_{n}\) is defined by: \[ \textbf{x}_{n} = \begin{bmatrix} x_{n} \\ \dot{x}_{n} \\ \ddot{x}_{n} \\ y_{n} \\ \dot{y}_{n} \\ \ddot{y}_{n} \end{bmatrix} \]
So the extrapolated vehicle state for time n + 1 can be described by:
\(\begin{array}{ccc} \begin{bmatrix} \hat{x}_{n+1,n} \\ \hat{\dot{x}}_{n+1,n} \\ \hat{\ddot{x}}_{n+1,n} \\ \hat{y}_{n+1,n} \\ \hat{\dot{y}}_{n+1,n} \\ \hat{\ddot{y}}_{n+1,n} \end{bmatrix} = & \begin{bmatrix} 1 & \Delta t & 0.5 \Delta t^2 & 0 & 0 & 0\\ 0 & 1 & \Delta t & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & \Delta t & 0.5 \Delta t^2\\ 0 & 0 & 0 & 0 & 1 & \Delta t \\ 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix} & \begin{bmatrix} \hat{x}_{n,n} \\ \hat{\dot{x}}_{n,n} \\ \hat{\ddot{x}}_{n,n} \\ \hat{y}_{n,n} \\ \hat{\dot{y}}_{n,n} \\ \hat{\ddot{y}}_{n,n} \end{bmatrix} \end{array}\)
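A quick numerical sketch (with \(\Delta t = 1\) s and arbitrary state values chosen for illustration) confirming that this \(F\) reproduces the constant-acceleration kinematics \(p + v\Delta t + \frac{1}{2}a\Delta t^2\):

```r
dt <- 1
F6 <- rbind(c(1, dt, 0.5*dt^2, 0, 0, 0),
            c(0, 1, dt, 0, 0, 0),
            c(0, 0, 1, 0, 0, 0),
            c(0, 0, 0, 1, dt, 0.5*dt^2),
            c(0, 0, 0, 0, 1, dt),
            c(0, 0, 0, 0, 0, 1))
s <- c(10, 2, 0.5, -4, 3, -0.2)  # illustrative state (x, vx, ax, y, vy, ay)
s1 <- F6 %*% s
# x position: 10 + 2*1 + 0.5*0.5*1^2 = 12.25
# y position: -4 + 3*1 + 0.5*(-0.2)*1^2 = -1.1
s1[1]; s1[4]
```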
The covariance extrapolation equation:
The estimate covariance matrix is, assuming that the estimation error in \(X\) and \(Y\) axes are not correlated: \[ P = \begin{bmatrix} p_{x} & p_{x\dot{x}} & p_{x\ddot{x}} & 0 & 0 & 0 \\ p_{\dot{x}x} & p_{\dot{x}} & p_{\dot{x}\ddot{x}} & 0 & 0 & 0 \\ p_{\ddot{x}x} & p_{\ddot{x}\dot{x}} & p_{\ddot{x}} & 0 & 0 & 0\\ 0 & 0 & 0 & p_{y} & p_{y\dot{y}} & p_{y\ddot{y}} \\ 0 & 0 & 0 & p_{\dot{y}y} & p_{\dot{y}} & p_{\dot{y}\ddot{y}} \\ 0 & 0 & 0 & p_{\ddot{y}y} & p_{\ddot{y}\dot{y}} & p_{\ddot{y}} \\ \end{bmatrix} \]
The process noise matrix, assuming a discrete noise model: \[\begin{align*} Q = & \begin{bmatrix} \sigma^2_{x} & \sigma^2_{x\dot{x}} & \sigma^2_{x\ddot{x}} & 0 & 0 & 0 \\ \sigma^2_{\dot{x}x} & \sigma^2_{\dot{x}} & \sigma^2_{\dot{x}\ddot{x}} & 0 & 0 & 0 \\ \sigma^2_{\ddot{x}x} & \sigma^2_{\ddot{x}\dot{x}} & \sigma^2_{\ddot{x}} & 0 & 0 & 0\\ 0 & 0 & 0 & \sigma^2_{y} & \sigma^2_{y\dot{y}} & \sigma^2_{y\ddot{y}} \\ 0 & 0 & 0 & \sigma^2_{\dot{y}y} & \sigma^2_{\dot{y}} & \sigma^2_{\dot{y}\ddot{y}} \\ 0 & 0 & 0 & \sigma^2_{\ddot{y}y} & \sigma^2_{\ddot{y}\dot{y}} & \sigma^2_{\ddot{y}} \\ \end{bmatrix} \\ &= \sigma^2_{a} \begin{bmatrix} \frac{\Delta t^4}{4} & \frac{\Delta t^3}{2} & \frac{\Delta t^2}{2} & 0 & 0 & 0 \\ \frac{\Delta t^3}{2} & \Delta t^2 & \Delta t & 0 & 0 & 0 \\ \frac{\Delta t^2}{2} & \Delta t & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & \frac{\Delta t^4}{4} & \frac{\Delta t^3}{2} & \frac{\Delta t^2}{2} \\ 0 & 0 & 0 & \frac{\Delta t^3}{2} & \Delta t^2 & \Delta t \\ 0 & 0 & 0 & \frac{\Delta t^2}{2} & \Delta t & 1 \\ \end{bmatrix} \end{align*}\]
The measurement equation:
The measurement provides us only \(X\) and \(Y\) coordinates of the vehicle: \[ z_{n} = \begin{bmatrix} x_{n, measured} \\ y_{n, measured} \end{bmatrix} \]
The dimension of \(z_{n}\) is \(2×1\) and the dimension of \(x_{n}\) is \(6×1\). Therefore the dimension of the observation matrix \(\textbf{H}\) is \(2 × 6\). \[ \textbf{H} = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \end{bmatrix} \]
The measurement uncertainty, assuming that the \(x\) and \(y\) measurements are uncorrelated: \[ \textbf{R}_{n} = \begin{bmatrix} \sigma^2_{x_{m}} & 0 \\ 0 & \sigma^2_{y_{m}} \end{bmatrix} \]
The subscript \(m\) is for measurement uncertainty, but for this example we assume a constant measurement uncertainty: \[\textbf{R}_{1} = ... = \textbf{R}_{n}=\textbf{R}\]
So we have defined all the necessary blocks for the Kalman Gain, the state update equation, and the covariance update equation.
Numerical example
Let us assume a vehicle moving in a straight line in the X direction with a constant velocity. After traveling 400 meters, the vehicle turns right with a turning radius of 300 meters. During the turning maneuver, the vehicle experiences acceleration due to the circular motion (centripetal acceleration).
# The measurements period (s), ∆t = 1s
deltaT = 1
# The random acceleration standard deviation (m/s^2)
sigma_a = 0.2
# The measurement error standard deviation (m)
sigma_xm = 3
sigma_ym = 3
# The state transition matrix F would be
vec1 <- c(1, 1, 0.5, 0, 0, 0)
vec2 <- c(0, 1, 1, 0, 0, 0)
vec3 <- c(0, 0, 1, 0, 0, 0)
vec4 <- c(0, 0, 0, 1, 1, 0.5)
vec5 <- c(0, 0, 0, 0, 1, 1)
vec6 <- c(0, 0, 0, 0, 0, 1)
transMat <- rbind(vec1, vec2, vec3, vec4, vec5, vec6)
# The process noise matrix Q would be
vec1 <- c(1/4, 1/2, 1/2, 0, 0, 0)
vec2 <- c(1/2, 1, 1, 0, 0, 0)
vec3 <- c(1/2, 1, 1, 0, 0, 0)
vec4 <- c(0, 0, 0, 1/4, 1/2, 1/2)
vec5 <- c(0, 0, 0, 1/2, 1, 1)
vec6 <- c(0, 0, 0, 1/2, 1, 1)
processNoiseMat <- rbind(vec1, vec2, vec3, vec4, vec5, vec6) * sigma_a^2
# The measurement covariance R would be
R <- rbind(c(sigma_xm**2, 0), c(0, sigma_ym**2))
# Observation matrix
H <- rbind(c(1, 0, 0, 0, 0, 0), c(0, 0, 0, 1, 0, 0))
# The set of 35 noisy measurements
x <- c(301.5, 298.23, 297.83, 300.42, 301.94, 299.5, 305.98, 301.25, 299.73, 299.2, 298.62, 301.84, 299.6, 295.3, 299.3, 301.95, 296.3, 295.11, 295.12, 289.9, 283.51, 276.42, 264.22, 250.25, 236.66, 217.47, 199.75, 179.7, 160, 140.92, 113.53, 93.68, 69.71, 45.93, 20.87)
y <- c(-401.46, -375.44, -346.15, -320.2, -300.08, -274.12, -253.45, -226.4, -200.65, -171.62, -152.11, -125.19, -93.4, -74.79, -49.12, -28.73, 2.99, 25.65, 49.86, 72.87, 96.34, 120.4, 144.69, 168.06, 184.99, 205.11, 221.82, 238.3, 253.02, 267.19, 270.71, 285.86, 288.48, 292.9, 298.77)
zn <- cbind(x, y)

Iteration Zero
# Initialization
x00 <- c(0, 0, 0, 0, 0, 0) # as we do not know the vehicle location
# Since this is a guess, we set a very high estimate uncertainty
# The high estimate uncertainty results in a high Kalman Gain, giving a high weight to the measurement.
P00 <- diag(c(500, 500, 500, 500, 500, 500))

Prediction
# Predict the state for the next step
x10 <- transMat %*% x00; x10
## [,1]
## vec1 0
## vec2 0
## vec3 0
## vec4 0
## vec5 0
## vec6 0
# Extrapolate the estimate uncertainty
P10 <- transMat %*% P00 %*% t(transMat) + processNoiseMat; P10
## vec1 vec2 vec3 vec4 vec5 vec6
## vec1 1125.01 750.02 250.02 0.00 0.00 0.00
## vec2 750.02 1000.04 500.04 0.00 0.00 0.00
## vec3 250.02 500.04 500.04 0.00 0.00 0.00
## vec4 0.00 0.00 0.00 1125.01 750.02 250.02
## vec5 0.00 0.00 0.00 750.02 1000.04 500.04
## vec6 0.00 0.00 0.00 250.02 500.04 500.04
First Iteration
# Step 1 - Measure
z1 <- c(zn[1, 1], zn[1, 2]); z1
## [1] 301.50 -401.46
# Step 2 - Update
# The Kalman Gain calculation:
k1 <- P10 %*% t(H) %*% solve(H %*% P10 %*% t(H) + R); k1
## [,1] [,2]
## vec1 0.9920636 0.0000000
## vec2 0.6613875 0.0000000
## vec3 0.2204742 0.0000000
## vec4 0.0000000 0.9920636
## vec5 0.0000000 0.6613875
## vec6 0.0000000 0.2204742
Therefore, we can see that the Kalman Gain for the position is 0.9921, which means that the weight of the first measurement is significantly higher than the weight of the estimate; that is, the weight of the estimate is negligible.
# Estimate the current state with the first measurement
x11 <- x10 + k1 %*% (z1 - H %*% x10); x11
## [,1]
## vec1 299.10716
## vec2 199.40832
## vec3 66.47299
## vec4 -398.27384
## vec5 -265.52061
## vec6 -88.51159
# Update the current estimate uncertainty
# Update the current estimate uncertainty
P11 <- (diag(6) - k1 %*% H) %*% P10 %*% t(diag(6) - k1 %*% H) + k1 %*% R %*% t(k1); P11
## vec1 vec2 vec3 vec4 vec5 vec6
## vec1 8.928572 5.952487 1.984268 0.000000 0.000000 0.000000
## vec2 5.952487 503.986173 334.679906 0.000000 0.000000 0.000000
## vec3 1.984268 334.679906 444.917029 0.000000 0.000000 0.000000
## vec4 0.000000 0.000000 0.000000 8.928572 5.952487 1.984268
## vec5 0.000000 0.000000 0.000000 5.952487 503.986173 334.679906
## vec6 0.000000 0.000000 0.000000 1.984268 334.679906 444.917029
# Predict
x21 <- transMat %*% x11; x21
## [,1]
## vec1 531.75198
## vec2 265.88131
## vec3 66.47299
## vec4 -708.05025
## vec5 -354.03220
## vec6 -88.51159
P21 <- transMat %*% P11 %*% t(transMat) + processNoiseMat; P21
## vec1 vec2 vec3 vec4 vec5 vec6
## vec1 972.7232 1236.4213 559.1427 0.0000 0.0000 0.0000
## vec2 1236.4213 1618.3030 779.6369 0.0000 0.0000 0.0000
## vec3 559.1427 779.6369 444.9570 0.0000 0.0000 0.0000
## vec4 0.0000 0.0000 0.0000 972.7232 1236.4213 559.1427
## vec5 0.0000 0.0000 0.0000 1236.4213 1618.3030 779.6369
## vec6 0.0000 0.0000 0.0000 559.1427 779.6369 444.9570
Generalization
multiKF <- function(transMat, processNoiseMat, H, R, iniX, iniP, zn){
  # Initialization
  x <- list(iniX)
  p <- list(iniP)
  k <- list(0)
  # Prediction
  x <- append(x, list(transMat %*% x[[1]]))
  p <- append(p, list(transMat %*% p[[1]] %*% t(transMat) + processNoiseMat))
  for (i in 2:dim(zn)[1]){
    # Measure
    z <- c(zn[i-1, 1], zn[i-1, 2])
    # Update
    k <- append(k, list(p[[length(p)]] %*% t(H) %*% solve(H %*% p[[length(p)]] %*% t(H) + R)))
    # Estimate the current state
    x <- append(x, list(x[[length(x)]] + k[[i]] %*% (z - H %*% x[[length(x)]])))
    # Update the current estimate uncertainty
    p <- append(p, list((diag(dim(k[[i]])[1]) - k[[i]] %*% H) %*% p[[length(p)]] %*% t(diag(dim(k[[i]])[1]) - k[[i]] %*% H) + k[[i]] %*% R %*% t(k[[i]])))
    # Predict
    x <- append(x, list(transMat %*% x[[length(x)]]))
    p <- append(p, list(transMat %*% p[[length(p)]] %*% t(transMat) + processNoiseMat))
  }
  return(list(x, p, k))
}
result <- multiKF(transMat, processNoiseMat, H, R, x00, P00, zn)
x <- result[[1]]
p <- result[[2]]
k <- result[[3]]

Visualization
Vehicle Position
The true values of the position in the x axis are: \(300\) for the first 15 s, and \(300\cos\theta(t)\) for the remaining 20 s, with \(\theta(t) = \frac{(t-15) \pi}{38}\).
The true values of the position in the y axis are: \(-400\) for the first value, then \(-400 + 26t\) for the next 14 s, and \(300\sin\theta(t)\) for the remaining 20 s, with the same \(\theta(t)\).
# True values
trueValposX <- c(rep(300, 15), 300 * cos(((16:35 - 15) * pi) / 38))
trueValposY <- c(-400, -400 + 26*1:13, 0, 300 * sin(((16:35 - 15) * pi) / 38))
# Take the estimates in the x and y axis
estimatesX <- c()
estimatesY <- c()
for (i in seq(3, 70, by = 2)){
  estimatesX = append(estimatesX, list(x[[i]][1]))
  estimatesY = append(estimatesY, list(x[[i]][4]))
}
estimatesX = append(estimatesX, list(19.3))
estimatesY = append(estimatesY, list(297.79))
# Confidence ellipse for the probability of 95%:
scaleFactor <- sqrt(-2*log(1 - 0.95))
muValX <- c()
muValY <- c()
phiVal <- c()
kaVal <- c()
kbVal <- c()
for (i in seq(3, 70, by = 2)){
  # Get the covariance matrix for each estimation
  sigma2 <- rbind(c(p[[i]][1,1], 0),
                  c(0, p[[i]][4,4]))
  # Get the eigen values and eigen vectors
  res <- eigen(sigma2)
  # θ = arctan(vy/vx)
  phiVal <- append(phiVal, list(atan(res$vectors[2,1] / res$vectors[1,1])))
  # k * the highest eigenvalue square root
  kaVal <- append(kaVal, list(scaleFactor * sqrt(res$values[1])))
  # k * the second eigenvalue square root
  kbVal <- append(kbVal, list(scaleFactor * sqrt(res$values[2])))
  # Centre of the ellipse, each estimation in the x and y axis
  muValX <- append(muValX, list(c(x[[i]][1])))
  muValY <- append(muValY, list(c(x[[i]][4])))
}
muValX <- append(muValX, c(19.3))
muValY <- append(muValY, c(297.79))
phiVal <- append(phiVal, c(-1.570796))
kaVal <- append(kaVal, c(scaleFactor * sqrt(5)))
kbVal <- append(kbVal, c(scaleFactor * sqrt(5)))
# Dataframe with all the data
data <- data.frame(
  estimX = unlist(estimatesX), # only the estimates
  estimY = unlist(estimatesY),
  measurements.x = zn[,1],
  measurements.y = zn[,2],
  muValX = unlist(muValX),
  muValY = unlist(muValY),
  phiVal = unlist(phiVal),
  kaVal = unlist(kaVal),
  kbVal = unlist(kbVal),
  trueValX = trueValposX,
  trueValY = trueValposY
)
# Plot to compare the true value, measured values and estimates
ggplot(data) +
  geom_path(aes(x = trueValposX, y = trueValposY, color = "True values")) +
  geom_point(aes(x = trueValposX, y = trueValposY, color = "True values", shape = "True values")) +
  geom_path(aes(x = measurements.x, y = measurements.y, color = "Measurements")) +
  geom_point(aes(x = measurements.x, y = measurements.y, color = "Measurements", shape = "Measurements")) +
  geom_path(aes(x = estimX, y = estimY, color = "Estimates")) +
  geom_point(aes(x = estimX, y = estimY, color = "Estimates", shape = "Estimates")) +
  geom_ellipse(aes(x0 = muValX, y0 = muValY, a = kaVal, b = kbVal, angle = phiVal, color = "Interval"), fill = "yellow", alpha = 0.2) +
  coord_fixed() +
  labs(title = "Vehicle Position",
       x = "X(m)",
       y = "Y(m)",
       color = "") +
  xlim(c(0.00001, 310)) +
  scale_color_manual(values = c("Estimates" = "red", "Measurements" = "blue", "True values" = "green", "Interval" = "yellow"), name = "") +
  scale_shape_manual(values = c("Estimates" = 19, "Measurements" = 15, "True values" = 18, "Interval" = 1), name = "")

The circles on the plot represent the 95% confidence ellipse. Since the measurement errors in the x and y axes are equal, the confidence ellipse is a circle. Our Kalman Filter is designed for a constant acceleration model. Nevertheless, thanks to a properly chosen \(\sigma^2_{a}\) parameter, it succeeds in tracking the maneuvering vehicle, that is, the vehicle experiencing acceleration due to the circular motion (centripetal acceleration).
However, as we can observe from the plot above, the difference between the measurements and the estimates does not seem significant. This is because the given measurements are quite precise compared to the true values, so we cannot really see whether the Kalman Filter works. Therefore, we create noisier measurements to check this. Since we use noisier measurements, we also need to increase the value of R so that the Kalman Filter can weight the measurements correctly.
# Noisy measurements for position in x and y axis
x <- trueValposX + rnorm(35, 0, 15)
y <- trueValposY + rnorm(35, 0, 15)
zn <- cbind(x, y)
# Increase the value of R
sigma_xm <- 10
sigma_ym <- 10
R <- rbind(c(sigma_xm**2, 0), c(0, sigma_ym**2))
result <- multiKF(transMat, processNoiseMat, H, R, x00, P00, zn)
x <- result[[1]]
p <- result[[2]]
k <- result[[3]]
# Take the estimates in the x and y axis
estimatesX <- c()
estimatesY <- c()
for (i in seq(3, 70, by = 2)){
  estimatesX = append(estimatesX, list(x[[i]][1]))
  estimatesY = append(estimatesY, list(x[[i]][4]))
}
estimatesX = append(estimatesX, list(19.3))
estimatesY = append(estimatesY, list(297.79))
# Confidence ellipse for the probability of 95%:
scaleFactor <- sqrt(-2*log(1 - 0.95))
muValX <- c()
muValY <- c()
phiVal <- c()
kaVal <- c()
kbVal <- c()
for (i in seq(3, 70, by = 2)){
  # Get the covariance matrix for each estimation
  sigma2 <- rbind(c(p[[i]][1,1], 0),
                  c(0, p[[i]][4,4]))
  # Get the eigen values and eigen vectors
  res <- eigen(sigma2)
  # θ = arctan(vy/vx)
  phiVal <- append(phiVal, list(atan(res$vectors[2,1] / res$vectors[1,1])))
  # k * the highest eigenvalue square root
  kaVal <- append(kaVal, list(scaleFactor * sqrt(res$values[1])))
  # k * the second eigenvalue square root
  kbVal <- append(kbVal, list(scaleFactor * sqrt(res$values[2])))
  # Centre of the ellipse, each estimation in the x and y axis
  muValX <- append(muValX, list(c(x[[i]][1])))
  muValY <- append(muValY, list(c(x[[i]][4])))
}
muValX <- append(muValX, c(19.3))
muValY <- append(muValY, c(297.79))
phiVal <- append(phiVal, c(-1.570796))
kaVal <- append(kaVal, c(scaleFactor * sqrt(5)))
kbVal <- append(kbVal, c(scaleFactor * sqrt(5)))
# Dataframe with all the data
data <- data.frame(
  estimX = unlist(estimatesX),
  estimY = unlist(estimatesY),
  measurements.x = zn[,1],
  measurements.y = zn[,2],
  muValX = unlist(muValX),
  muValY = unlist(muValY),
  phiVal = unlist(phiVal),
  kaVal = unlist(kaVal),
  kbVal = unlist(kbVal),
  trueValX = trueValposX,
  trueValY = trueValposY
)
# Plot to compare the true value, measured values and estimates
ggplot(data) +
  geom_path(aes(x = trueValposX, y = trueValposY, color = "True values")) +
  geom_point(aes(x = trueValposX, y = trueValposY, color = "True values", shape = "True values")) +
  geom_path(aes(x = estimX, y = estimY, color = "Estimates")) +
  geom_point(aes(x = estimX, y = estimY, color = "Estimates", shape = "Estimates")) +
  geom_path(aes(x = measurements.x, y = measurements.y, color = "Measurements")) +
  geom_point(aes(x = measurements.x, y = measurements.y, color = "Measurements", shape = "Measurements")) +
  geom_ellipse(aes(x0 = muValX, y0 = muValY, a = kaVal, b = kbVal, angle = phiVal, color = "Interval"), fill = "yellow", alpha = 0.2) +
  coord_fixed() +
  labs(title = "Vehicle Position",
       x = "X(m)",
       y = "Y(m)",
       color = "") +
  xlim(c(0.00001, 310)) +
  scale_color_manual(values = c("Estimates" = "red", "Measurements" = "blue", "True values" = "green", "Interval" = "yellow"), name = "") +
  scale_shape_manual(values = c("Estimates" = 19, "Measurements" = 15, "True values" = 18, "Interval" = 1), name = "")

Thus, we can now see that the Kalman Filter estimates are better than the noisy measurements. Even though the first estimates are not very good, as the Kalman Filter iterates they get closer to the true position of the vehicle. Next, we will see how this affects the velocity.
Vehicle X axis velocity
Since the absolute velocity of the vehicle is 26 m/s, the true values of the velocity in the x axis are: \(0\) for the first 15 s, and \(-26\sin\theta(t)\) for the remaining 20 s, with \(\theta(t) = \frac{(t-15) \pi}{38}\).
Besides, we will calculate \(v_{i+1} = \frac{x_{i+1} - x_{i}}{t_{i+1} - t_{i}}\) as a naive estimation of the velocity, in order to compare it with the estimates we got from the Kalman Filter.
# Take the velocity estimates in the x axis
estimatesX <- c()
for (i in seq(3, 70, by = 2)){
  estimatesX = append(estimatesX, list(x[[i]][2]))
}
trueValX <- c(rep(0, 15), -26*sin(((16:34 - 15) * pi) / 38))
# Dataframe with all the data
dataVx <- data.frame(
  recta = c(1:34),
  estimX = unlist(estimatesX),
  trueVal = trueValX,
  # As ∆t = 1s, the naive estimations are the differences between consecutive noisy measurements
  naiveVelXEst = diff(zn[,1])
)
# Plot to compare the true values, the naive estimates and the KF estimates
plotVx <- ggplot(dataVx, aes(x = recta)) +
  geom_path(aes(y = trueVal, color = "True values")) +
  geom_point(aes(y = trueVal, color = "True values", shape = "True values")) +
  geom_path(aes(y = estimX, color = "Estimates")) +
  geom_point(aes(y = estimX, color = "Estimates", shape = "Estimates")) +
  geom_path(aes(y = naiveVelXEst, color = "Naive")) +
  geom_point(aes(y = naiveVelXEst, color = "Naive", shape = "Naive")) +
  labs(title = "Vehicle X-axis velocity",
       x = "Time(s)",
       y = "Vx(m/s)",
       color = "") +
  scale_color_manual(values = c("Estimates" = "red", "True values" = "green", "Naive" = "purple"), name = "") +
  scale_shape_manual(values = c("Estimates" = 19, "True values" = 18, "Naive" = 1), name = "")
ggplotly(plotVx) %>% config(displayModeBar = FALSE)

We can observe in this plot that, over time, the estimates become much better than the naive estimations. This means that the Kalman Filter is indeed a good estimator for this example, even though we had no velocity measurements and the position measurements were noisy.
Vehicle Y axis velocity
Since the absolute velocity of the vehicle is 26 m/s, the true values of the velocity in the y axis are: \(26\) for the first 15 s, and \(26\cos\theta(t)\) for the remaining 20 s, with \(\theta(t) = \frac{(t-15) \pi}{38}\).
Besides, we will calculate \(v_{i+1} = \frac{y_{i+1} - y_{i}}{t_{i+1} - t_{i}}\) as a naive estimation of the velocity, in order to compare it with the estimates we got from the Kalman Filter.
# Take the velocity estimates in the y axis
estimatesY <- c()
for (i in seq(3, 70, by = 2)){
  estimatesY = append(estimatesY, list(x[[i]][5]))
}
trueValY <- c(rep(26, 15), 26*cos(((16:34 - 15) * pi) / 38))
# Dataframe with all the data
dataVy <- data.frame(
  recta = c(1:34),
  estimY = unlist(estimatesY),
  trueVal = trueValY,
  # As ∆t = 1s, the naive estimations are the differences between consecutive noisy measurements
  naiveVelXEst = diff(zn[,2])
)
# Plot to compare the true values, the naive estimates and the KF estimates
plotVy <- ggplot(dataVy, aes(x = recta)) +
  geom_path(aes(y = trueVal, color = "True values")) +
  geom_point(aes(y = trueVal, color = "True values", shape = "True values")) +
  geom_path(aes(y = estimY, color = "Estimates")) +
  geom_point(aes(y = estimY, color = "Estimates", shape = "Estimates")) +
  geom_path(aes(y = naiveVelXEst, color = "Naive")) +
  geom_point(aes(y = naiveVelXEst, color = "Naive", shape = "Naive")) +
  labs(title = "Vehicle Y-axis velocity",
       x = "Time(s)",
       y = "Vy(m/s)",
       color = "") +
  scale_color_manual(values = c("Estimates" = "red", "True values" = "green", "Naive" = "purple"), name = "") +
  scale_shape_manual(values = c("Estimates" = 19, "True values" = 18, "Naive" = 1), name = "")
ggplotly(plotVy) %>% config(displayModeBar = FALSE)

We reach the same conclusion as for the previous plot: the Kalman Filter tracks the true velocity well.
Example 3 - estimate the vehicle’s state, position and velocity
Here we will implement an instance of the Kalman Filter for
navigation, in which we use sensor data from an Inertial Measurement
Unit (IMU) and a Global Navigation Satellite System (GNSS) receiver to
estimate the vehicle’s state: its position and velocity. This
scenario focuses on determining position and velocity only, excluding any
consideration of attitude information. It is therefore a good
example of the case where we have unreliable information about the position of a
vehicle, but more or less reliable information about its velocity and
acceleration. The state vector, in this context, comprises the
three-dimensional position and velocity:
The
system state \(\textbf{x}\) is defined by:
\[ \textbf{x} =
\begin{bmatrix}
\textbf{p} \\
\textbf{v}
\end{bmatrix} =
\begin{bmatrix}
p_{x} \\
p_{y} \\
p_{z} \\
v_{x} \\
v_{y} \\
v_{z}
\end{bmatrix}
\] Where \(p\) and
\(v\) are the position vector and the
velocity vector whose elements are defined in x, y, z axes,
respectively. So the state in time \(k\) can be predicted by the previous state
in time \(k−1\) as:
\[ \textbf{x}_k = \begin{bmatrix} \textbf{p}_k \\ \textbf{v}_k \end{bmatrix} = \begin{bmatrix} \textbf{p}_{k-1} + \textbf{v}_{k-1}\Delta t + \frac{1}{2}\tilde{\textbf{a}}_{k-1}\Delta t^2 \\ \textbf{v}_{k-1} + \tilde{\textbf{a}}_{k-1} \Delta t \end{bmatrix} \]
where \(\tilde{\textbf{a}}_{k-1}\) is the acceleration applied to the vehicle. The above equation can be rearranged as: \[ \mathbf{x}_k = \begin{bmatrix} \mathbf{I}_{3 \times 3} & \mathbf{I}_{3 \times 3}\Delta t \\ 0_{3 \times 3} & \mathbf{I}_{3 \times 3} \end{bmatrix} \mathbf{x}_{k-1} + \begin{bmatrix} \frac{1}{2}\mathbf{I}_{3 \times 3}\Delta t^2 \\ \mathbf{I}_{3 \times 3} \Delta t \end{bmatrix} \tilde{\textbf{a}}_{k-1} \]
The process noise comes from the accelerometer output, \(a_{k-1} = \tilde{a}_{k-1} + e_{k-1}\), where \(e_{k-1}\) denotes the noise of the accelerometer output. Suppose \(e_{k-1} \sim \mathcal{N}(0, \mathbf{I}_{3 \times 3}\sigma^2_e)\). From the covariance relationship, \(\text{Cov}(Ax) = A\Sigma A^T\) where \(\text{Cov}(x) = \Sigma\), we get the covariance matrix of the process noise as:
\[ Q = \begin{bmatrix} \frac{1}{2}\mathbf{I}_{3 \times 3}\Delta t^2 \\ \mathbf{I}_{3 \times 3}\Delta t \\ \end{bmatrix} \mathbf{I}_{3 \times 3}\sigma_e^2 \begin{bmatrix} \frac{1}{2}\mathbf{I}_{3 \times 3}\Delta t^2 \\ \mathbf{I}_{3 \times 3}\Delta t \\ \end{bmatrix}^T = \begin{bmatrix} \frac{1}{4}\mathbf{I}_{3 \times 3}\Delta t^4 & \frac{1}{2}\mathbf{I}_{3 \times 3}\Delta t^3 \\ \frac{1}{2}\mathbf{I}_{3 \times 3}\Delta t^3 & \mathbf{I}_{3 \times 3}\Delta t^2 \\ \end{bmatrix} \sigma_e^2 \]
Now, we have the process model as: \[\mathbf{x}_k = \mathbf{F}\mathbf{x}_{k-1} + \mathbf{G}\mathbf{a}_{k-1} + \mathbf{w}_{k-1} \]
Where \[ \mathbf{F} = \begin{bmatrix} \mathbf{I}_{3 \times 3} & \mathbf{I}_{3 \times 3}\Delta t \\ 0_{3 \times 3} & \mathbf{I}_{3 \times 3} \\ \end{bmatrix} \quad \quad \mathbf{G} = \begin{bmatrix} \frac{1}{2}\mathbf{I}_{3 \times 3}\Delta t^2 \\ \mathbf{I}_{3 \times 3} \Delta t \\ \end{bmatrix} \quad \quad \mathbf{w}_{k-1} \sim \mathcal{N}(0, Q) \]
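As a sketch (the `makeModel` helper is ours, not part of the original script), \(\mathbf{F}\), \(\mathbf{G}\) and \(Q\) can be built for a generic \(\Delta t\); writing \(Q = \mathbf{G}\,\sigma_e^2\,\mathbf{G}^T\) as an explicit product also makes the \(\frac{1}{2}\Delta t^3\) cross-covariance blocks visible:

```r
# Build F, G and Q = G sigma_e^2 t(G) for a 3-D constant-velocity model.
# makeModel is an illustrative helper, not part of the original script.
I3 <- diag(3)
Z3 <- matrix(0, 3, 3)
makeModel <- function(dt, sigma_e) {
  Fm <- rbind(cbind(I3, I3 * dt), cbind(Z3, I3))   # state transition matrix
  Gm <- rbind(I3 * dt^2 / 2, I3 * dt)              # acceleration input matrix
  Qm <- Gm %*% (I3 * sigma_e^2) %*% t(Gm)          # process noise covariance
  list(Fm = Fm, Gm = Gm, Qm = Qm)
}
m <- makeModel(dt = 1, sigma_e = 0.3)
```

With `dt = 1` the diagonal position blocks of `Qm` carry \(\frac{1}{4}\sigma_e^2\), the velocity blocks \(\sigma_e^2\), and the position-velocity cross blocks \(\frac{1}{2}\sigma_e^2\).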
The GNSS receiver provides position and velocity measurements corrupted by measurement noise \(\nu_k\) as: \[ z_k = \begin{bmatrix} \textbf{p}_k \\ \textbf{v}_k \end{bmatrix} + \mathbf{\nu}_k = H\mathbf{x}_k + \mathbf{\nu}_k \quad \quad \text{where} \quad H = \mathbf{I}_{6 \times 6} \quad \text{and} \quad \nu_k \sim \mathcal{N}(0, R) \]
To simulate the system’s behavior, let’s consider \(N=20\) time steps \((k=1,2,3,...,N)\) with a time increment of \(\Delta t=1\). We also want to initially generate a time history of the true state, or a true trajectory. A convenient approach involves creating a series of true accelerations over time and integrating them to obtain true velocity and position.
In this specific scenario, we set the true acceleration to 0, and the vehicle moves with a constant velocity, \(\mathbf{v}_k = [5, 5, 0]^T\) for all \(k=1,2,3,...,N\), starting from the initial position \(\mathbf{p}_0 = [0, 0, 0]\). Importantly, when utilizing the Kalman filter to estimate the vehicle state, it’s often unknown whether the vehicle maintains a constant velocity. That is, this situation is analogous to the case with nonzero acceleration in the context of Kalman filter models.
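Under these assumptions, integrating the (zero) true acceleration reduces to stepping the position by the constant velocity; a compact sketch (the names `p0`, `v`, `true_pos` are illustrative):

```r
# True positions for constant velocity v starting at p0, steps k = 0..N-1 (dt = 1 s).
N <- 20
p0 <- c(0, 0, 0)
v <- c(5, 5, 0)
true_pos <- t(sapply(0:(N - 1), function(k) p0 + k * v))  # N x 3 matrix, one row per step
```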
We must introduce noise into the acceleration output and GNSS measurements at each time step. We assume that the acceleration output, the GNSS position, and the GNSS velocity are subject to noise with variances of \(0.3^2\), \(3^2\), and \(0.03^2\), respectively.
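One detail worth keeping in mind when generating this noise in R: `rnorm()` is parameterised by the standard deviation, so a variance of \(3^2\) corresponds to `sd = 3`. A quick sanity check (the sample size `1e5` is arbitrary):

```r
# rnorm() takes a standard deviation, so variance 3^2 means sd = 3, not sd = 3^2.
set.seed(42)
gnss_pos_noise <- rnorm(1e5, mean = 0, sd = 3)
empirical_var <- var(gnss_pos_noise)  # should be close to 9
```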
To construct the process noise covariance matrix \(Q\) and measurement noise covariance matrix \(R\), we can use the specified noise statistics to optimize performance. It’s important to note that in practical scenarios, we often lack knowledge of the true statistics of the noises, and the noises may deviate from Gaussian distributions. A common approach is to prudently set \(Q\) and \(R\) slightly larger than the anticipated values to ensure robustness.
Initial guesses \[ \mathbf{\hat{x}}_\mathbf{0}^+ = \begin{bmatrix} 2 \\ -2 \\ 0 \\ 5 \\ 5.1 \\ 0.1 \end{bmatrix} \quad \quad \mathbf{P}_\mathbf{0}^+ = \begin{bmatrix} \mathbf{I}_{3 \times 3} 4^2 & 0_{3 \times 3} \\ 0_{3 \times 3} & \mathbf{I}_{3 \times 3}0.4^2 \end{bmatrix} \]
Noise covariance matrices \[ Q = \begin{bmatrix} \frac{1}{4}\mathbf{I}_{3 \times 3}\Delta t^4 & \frac{1}{2}\mathbf{I}_{3 \times 3}\Delta t^3 \\ \frac{1}{2}\mathbf{I}_{3 \times 3}\Delta t^3 & \mathbf{I}_{3 \times 3}\Delta t^2 \end{bmatrix} 0.3^2 \quad \quad R = \begin{bmatrix} \mathbf{I}_{3 \times 3} 3^2 & 0_{3 \times 3} \\ 0_{3 \times 3} & \mathbf{I}_{3 \times 3} 0.03^2 \end{bmatrix} \]
where \(Q\) and \(R\) are constant for every time step. The more uncertain our initial guess for the state is, the larger the initial error covariance should be.
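Since \(\mathbf{P}_0^+\) and \(R\) are block-diagonal with a repeated scalar per block, they can also be written compactly; a sketch equivalent to the explicit `rbind` construction used below (`p00_compact` is an illustrative name):

```r
# Compact construction of the block-diagonal initial covariance:
# variance 4^2 on the three position components, 0.4^2 on the three velocities.
p00_compact <- diag(rep(c(4^2, 0.4^2), each = 3))
```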
Application of the Kalman Filter
Initialization and definition of the parameters
# Initialization
sigma_a <- 0.3^2
sigma_x <- 3^2
sigma_v <- 0.03^2
# ∆t = 1s
transF <- rbind(c(1, 0, 0, 1, 0, 0),
c(0, 1, 0, 0, 1, 0),
c(0, 0, 1, 0, 0, 1),
c(0, 0, 0, 1, 0, 0),
c(0, 0, 0, 0, 1, 0),
c(0, 0, 0, 0, 0, 1))
# Q = G t(G) * sigma_a, including the 1/2 cross-covariance blocks (∆t = 1)
Q <- rbind(c(1/4, 0, 0, 1/2, 0, 0),
c(0, 1/4, 0, 0, 1/2, 0),
c(0, 0, 1/4, 0, 0, 1/2),
c(1/2, 0, 0, 1, 0, 0),
c(0, 1/2, 0, 0, 1, 0),
c(0, 0, 1/2, 0, 0, 1)) * sigma_a
H <- rbind(c(1, 0, 0, 0, 0, 0),
c(0, 1, 0, 0, 0, 0),
c(0, 0, 1, 0, 0, 0),
c(0, 0, 0, 1, 0, 0),
c(0, 0, 0, 0, 1, 0),
c(0, 0, 0, 0, 0, 1))
R <- rbind(c(sigma_x, 0, 0, 0, 0, 0),
c(0, sigma_x, 0, 0, 0, 0),
c(0, 0, sigma_x, 0, 0, 0),
c(0, 0, 0, sigma_v, 0, 0),
c(0, 0, 0, 0, sigma_v, 0),
c(0, 0, 0, 0, 0, sigma_v))
G <- rbind(c(1/2, 0, 0),
c(0, 1/2, 0),
c(0, 0, 1/2),
c(1, 0, 0),
c(0, 1, 0),
c(0, 0, 1))
x00 <- c(2, -2, 0, 5, 5.1, 0.1)
p00 <- rbind(c(4^2, 0, 0, 0, 0, 0),
c(0, 4^2, 0, 0, 0, 0),
c(0, 0, 4^2, 0, 0, 0),
c(0, 0, 0, 0.4^2, 0, 0),
c(0, 0, 0, 0, 0.4^2, 0),
c(0, 0, 0, 0, 0, 0.4^2))
Now we are going to simulate the measurements and the true values, taking into account that, in this example, the true acceleration is set to 0 and the vehicle is moving with a constant velocity \(v_k=[5,5,0]^T\) for all \(k=1,2,3,…,N\), from the initial position \(p_0=[0,0,0]\).
dt <- 1
N <- 20
# True values
initial_position <- c(0, 0, 0)
constant_velocity <- c(5, 5, 0)
truePosition <- list(initial_position)
for (i in 1:(N-1)){
truePosition <- append(truePosition, list(truePosition[[length(truePosition)]] + dt*constant_velocity))
}
# Simulated measured values, true values plus the generated noise with its respective noise variance
simPos <- unlist(truePosition) + replicate(N, rnorm(3, sd = 3))
simVel <- replicate(N, constant_velocity) + replicate(N, rnorm(3, sd = 0.03))
simAcc <- replicate(N, c(0, 0, 0)) + replicate(N, rnorm(3, sd = 0.3))
# Take the position simulated measure in the x axis
measPosX <- c()
for (i in seq(1, 60, by = 3)){
measPosX = append(measPosX, list(simPos[i]))
}
# Take the position simulated measure in the y axis
measPosY <- c()
for (i in seq(2, 60, by = 3)){
measPosY = append(measPosY, list(simPos[i]))
}
# Take the position simulated measure in the z axis
measPosZ <- c()
for (i in seq(3, 60, by = 3)){
measPosZ = append(measPosZ, list(simPos[i]))
}
# Take the velocity simulated measure in the x axis
measVelX <- c()
for (i in seq(1, 60, by = 3)){
measVelX = append(measVelX, list(simVel[i]))
}
# Take the velocity simulated measure in the y axis
measVelY <- c()
for (i in seq(2, 60, by = 3)){
measVelY = append(measVelY, list(simVel[i]))
}
# Take the velocity simulated measure in the z axis
measVelZ <- c()
for (i in seq(3, 60, by = 3)){
measVelZ = append(measVelZ, list(simVel[i]))
}
# Take the acceleration simulated measure in the x axis
measAccX <- c()
for (i in seq(1, 60, by = 3)){
measAccX = append(measAccX, list(simAcc[i]))
}
# Take the acceleration simulated measure in the y axis
measAccY <- c()
for (i in seq(2, 60, by = 3)){
measAccY = append(measAccY, list(simAcc[i]))
}
# Take the acceleration simulated measure in the z axis
measAccZ <- c()
for (i in seq(3, 60, by = 3)){
measAccZ = append(measAccZ, list(simAcc[i]))
}
# Acceleration an: one 3x1 vector for each of the 20 time steps
an <- cbind(unlist(measAccX), unlist(measAccY), unlist(measAccZ))
# Position and velocity hn: one 6x1 vector for each of the 20 time steps
hn <- cbind(unlist(measPosX), unlist(measPosY), unlist(measPosZ),
unlist(measVelX), unlist(measVelY), unlist(measVelZ))
Generalization, apply the Kalman Filter
multiKF2 <- function(transMat, Q, H, R, G, iniX, iniP, hn, an){
# Initialization
x <- list(iniX)
u <- list(c(0, 0, 0))
p <- list(iniP)
k <- list(c(0, 0, 0))
# Prediction
x <- append(x, list(transMat %*% x[[1]] + G %*% u[[1]]))
p <- append(p, list(transMat %*% p[[1]] %*% t(transMat) + Q))
for (i in 2:N){
# Update
k <- append(k, list(p[[length(p)]] %*% t(H) %*% solve(H %*% p[[length(p)]] %*% t(H) + R)))
# Estimate the current state
x <- append(x, list(x[[length(x)]] + k[[i]] %*% (hn[i-1,] - H %*% x[[length(x)]])))
# Update the current estimate uncertainty
p <- append(p, list((diag(dim(k[[i]])[1]) - k[[i]] %*% H) %*% p[[length(p)]] %*% t(diag(dim(k[[i]])[1]) - k[[i]] %*% H) + k[[i]] %*% R %*% t(k[[i]])))
# Predict
x <- append(x, list(transMat %*% x[[length(x)]] + G %*% an[i-1,]))
p <- append(p, list(transMat %*% p[[length(p)]] %*% t(transMat) + Q))
}
return(list(x, p, k))
}
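To see what the update step inside `multiKF2` does, consider a scalar analogue of the gain computation (the numbers are illustrative, not taken from the example): with prior variance `p_prior` and measurement noise variance `r`, the gain is \(K = p/(p+r)\) and the posterior variance shrinks to \((1-K)p\).

```r
# Scalar analogue of the Kalman gain and covariance update used in multiKF2.
# p_prior and r are illustrative values, not taken from the example above.
p_prior <- 4   # prior (predicted) variance
r <- 1         # measurement noise variance
K <- p_prior / (p_prior + r)   # gain: close to 1 when the prior is uncertain
p_post <- (1 - K) * p_prior    # posterior variance, always <= p_prior
```

The matrix expressions `p %*% t(H) %*% solve(H %*% p %*% t(H) + R)` and the Joseph-form covariance update in the function are the multivariate versions of these two lines.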
result <- multiKF2(transF, Q, H, R, G, x00, p00, hn, an)
x <- result[[1]]
p <- result[[2]]
k <- result[[3]]
Visualization
Now we are going to make some plots to analyze the results that we obtained and compare with the true values and the simulated measurements.
X axis for velocity
# Take the estimates in the x axis
estimatesX <- c()
for (i in seq(1, 40, by = 2)){
estimatesX = append(estimatesX, list(x[[i]][4]))
}
# Take the estimate variances in the x axis
estimSigmaX <- c()
for (i in seq(1, 40, by = 2)){
estimSigmaX = append(estimSigmaX, list(p[[i]][4,4]))
}
# Dataframe with all the data
data <- data.frame(
time = c(1:20),
estimX = unlist(estimatesX),
measurements.x = hn[,4],
# the 95% confidence interval is given by [-1.96 sqrt(V); +1.96 sqrt(V)]
interMax = unlist(estimatesX) + 1.96 * sqrt(unlist(estimSigmaX)),
interMin = unlist(estimatesX) - 1.96 * sqrt(unlist(estimSigmaX)),
trueValX = replicate(N, constant_velocity[1])
)
# Plot to compare the true value, measured values and estimates
plot1 <- ggplot(data, aes(x = time)) +
geom_path(aes(y = measurements.x, color = "Measurements")) +
geom_point(aes(y = measurements.x, color = "Measurements", shape = "Measurements")) +
geom_path(aes(y = trueValX, color = "True values")) +
geom_point(aes(y = trueValX, color = "True values", shape = "True values")) +
geom_path(aes(y = estimX, color = "Estimates")) +
geom_point(aes(y = estimX, color = "Estimates", shape = "Estimates")) +
geom_line(aes(y = interMax, color = "95% confidence interval"), linetype = 1) +
geom_line(aes(y = interMin, color = "95% confidence interval"), linetype = 1) +
geom_ribbon(aes(ymin = interMin, ymax = interMax), fill = "yellow", alpha = 0.2) +
labs(title = "Vehicle velocity in x axis",
x = "Time(s)",
y = "Velocity(m/s)",
color = "") +
# ylim(c(-10,10)) +
scale_color_manual(values = c("Estimates" = "red", "Measurements" = "blue", "True values" = "green", "95% confidence interval" = 'yellow'), name = "") +
scale_shape_manual(values = c("Estimates" = 19, "Measurements" = 15, "True values" = 18, '95% confidence interval' = 1), name = "")
ggplotly(plot1) %>% config(displayModeBar = FALSE)
Y axis for velocity
# Take the estimates in the y axis
estimatesY <- c()
for (i in seq(1, 40, by = 2)){
estimatesY = append(estimatesY, list(x[[i]][5]))
}
# Take the estimate variances in the y axis
estimSigmaY <- c()
for (i in seq(1, 40, by = 2)){
estimSigmaY = append(estimSigmaY, list(p[[i]][5,5]))
}
# Dataframe with all the data
data <- data.frame(
time = c(1:20),
estimX = unlist(estimatesY),
measurements.x = hn[,5],
# the 95% confidence interval is given by [-1.96 sqrt(V); +1.96 sqrt(V)]
interMax = unlist(estimatesY) + 1.96 * sqrt(unlist(estimSigmaY)),
interMin = unlist(estimatesY) - 1.96 * sqrt(unlist(estimSigmaY)),
trueValX = replicate(N, constant_velocity[2])
)
# Plot to compare the true value, measured values and estimates
plot1 <- ggplot(data, aes(x = time)) +
geom_path(aes(y = measurements.x, color = "Measurements")) +
geom_point(aes(y = measurements.x, color = "Measurements", shape = "Measurements")) +
geom_path(aes(y = trueValX, color = "True values")) +
geom_point(aes(y = trueValX, color = "True values", shape = "True values")) +
geom_path(aes(y = estimX, color = "Estimates")) +
geom_point(aes(y = estimX, color = "Estimates", shape = "Estimates")) +
geom_line(aes(y = interMax, color = "95% confidence interval"), linetype = 1) +
geom_line(aes(y = interMin, color = "95% confidence interval"), linetype = 1) +
geom_ribbon(aes(ymin = interMin, ymax = interMax), fill = "yellow", alpha = 0.2) +
labs(title = "Vehicle velocity in y axis",
x = "Time(s)",
y = "Velocity(m/s)",
color = "") +
# ylim(c(-10,10)) +
scale_color_manual(values = c("Estimates" = "red", "Measurements" = "blue", "True values" = "green", "95% confidence interval" = 'yellow'), name = "") +
scale_shape_manual(values = c("Estimates" = 19, "Measurements" = 15, "True values" = 18, '95% confidence interval' = 1), name = "")
ggplotly(plot1) %>% config(displayModeBar = FALSE)
Z axis for velocity
# Take the estimates in the z axis
estimatesZ <- c()
for (i in seq(1, 40, by = 2)){
estimatesZ = append(estimatesZ, list(x[[i]][6]))
}
# Take the estimate variances in the z axis
estimSigmaZ <- c()
for (i in seq(1, 40, by = 2)){
estimSigmaZ = append(estimSigmaZ, list(p[[i]][6,6]))
}
# Dataframe with all the data
data <- data.frame(
time = c(1:20),
estimX = unlist(estimatesZ),
measurements.x = hn[,6],
# the 95% confidence interval is given by [-1.96 sqrt(V); +1.96 sqrt(V)]
interMax = unlist(estimatesZ) + 1.96 * sqrt(unlist(estimSigmaZ)),
interMin = unlist(estimatesZ) - 1.96 * sqrt(unlist(estimSigmaZ)),
trueValX = replicate(N, constant_velocity[3])
)
# Plot to compare the true value, measured values and estimates
plot1 <- ggplot(data, aes(x = time)) +
geom_path(aes(y = measurements.x, color = "Measurements")) +
geom_point(aes(y = measurements.x, color = "Measurements", shape = "Measurements")) +
geom_path(aes(y = trueValX, color = "True values")) +
geom_point(aes(y = trueValX, color = "True values", shape = "True values")) +
geom_path(aes(y = estimX, color = "Estimates")) +
geom_point(aes(y = estimX, color = "Estimates", shape = "Estimates")) +
geom_line(aes(y = interMax, color = "95% confidence interval"), linetype = 1) +
geom_line(aes(y = interMin, color = "95% confidence interval"), linetype = 1) +
geom_ribbon(aes(ymin = interMin, ymax = interMax), fill = "yellow", alpha = 0.2) +
labs(title = "Vehicle velocity in z axis",
x = "Time(s)",
y = "Velocity(m/s)",
color = "") +
# ylim(c(-10,10)) +
scale_color_manual(values = c("Estimates" = "red", "Measurements" = "blue", "True values" = "green", "95% confidence interval" = 'yellow'), name = "") +
scale_shape_manual(values = c("Estimates" = 19, "Measurements" = 15, "True values" = 18, '95% confidence interval' = 1), name = "")
ggplotly(plot1) %>% config(displayModeBar = FALSE)
XY axis for position
# Take the estimates in the x axis
estimatesX <- c()
for (i in seq(1, 40, by = 2)){
estimatesX = append(estimatesX, list(x[[i]][1]))
}
# Take the estimate variances in the x axis
estimSigmaX <- c()
for (i in seq(1, 40, by = 2)){
estimSigmaX = append(estimSigmaX, list(p[[i]][1,1]))
}
# Take the true value position in the x axis
truePosX <- c()
for (i in 1:N){
truePosX = append(truePosX, list(unlist(truePosition[[i]])[1]))
}
# Take the estimates in the y axis
estimatesY <- c()
for (i in seq(1, 40, by = 2)){
estimatesY = append(estimatesY, list(x[[i]][2]))
}
# Take the estimate variances in the y axis
estimSigmaY <- c()
for (i in seq(1, 40, by = 2)){
estimSigmaY = append(estimSigmaY, list(p[[i]][2,2]))
}
# Take the true value position in the y axis
truePosY <- c()
for (i in 1:N){
truePosY = append(truePosY, list(unlist(truePosition[[i]])[2]))
}
# Take the estimates in the z axis
estimatesZ <- c()
for (i in seq(1, 40, by = 2)){
estimatesZ = append(estimatesZ, list(x[[i]][3]))
}
# Take the estimate variances in the z axis
estimSigmaZ <- c()
for (i in seq(1, 40, by = 2)){
estimSigmaZ = append(estimSigmaZ, list(p[[i]][3,3]))
}
# Take the true value position in the z axis
truePosZ <- c()
for (i in 1:N){
truePosZ = append(truePosZ, list(unlist(truePosition[[i]])[3]))
}
# Confidence ellipse for the probability of 95%:
scaleFactor <- sqrt(-2*log(1 - 0.95))
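The scale factor just defined is not arbitrary: for a bivariate Gaussian, the squared Mahalanobis radius follows a chi-square distribution with 2 degrees of freedom, so \(k^2 = -2\ln(1-0.95)\) coincides with the 95% chi-square quantile. A quick check:

```r
# The 95% ellipse scale factor follows from the chi-square(2) quantile:
# P(chisq_2 <= k^2) = 0.95  =>  k^2 = qchisq(0.95, df = 2) = -2 * log(0.05)
k95 <- sqrt(-2 * log(1 - 0.95))
```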
muValX <- c()
muValY <- c()
phiVal <- c()
kaVal <- c()
kbVal <- c()
for (i in seq(1, 40, by = 2)){
# Get the covariance matrix for each estimation
sigma2 <- rbind(c(p[[i]][1,1], 0),
c(0, p[[i]][2,2]))
# Get the eigen values and eigen vectors
res <- eigen(sigma2)
# θ = arctan(vy/vx)
phiVal <- append(phiVal, list(atan(res$vectors[2,1] / res$vectors[1,1])))
# k * the highest eigenvalue square root
kaVal <- append(kaVal, list(scaleFactor * sqrt(res$values[1])))
# k * the second eigenvalue square root
kbVal <- append(kbVal, list(scaleFactor * sqrt(res$values[2])))
# Centre of the ellipse, each estimation in the x and y axis
muValX <- append(muValX, list(c(x[[i]][1])))
muValY <- append(muValY, list(c(x[[i]][2])))
}
# Dataframe with all the data
data2 <- data.frame(
estimX = unlist(estimatesX),
estimY = unlist(estimatesY),
measurements.x = hn[,1],
measurements.y = hn[,2],
trueValX = unlist(truePosX),
trueValY = unlist(truePosY),
muValX = unlist(muValX),
muValY = unlist(muValY),
phiVal = unlist(phiVal),
kaVal = unlist(kaVal),
kbVal = unlist(kbVal)
)
# Plot to compare the true value, measured values and estimates
ggplot(data2) +
geom_path(aes(x = measurements.x, y = measurements.y, color = "Measurements")) +
geom_point(aes(x = measurements.x, y = measurements.y, color = "Measurements", shape = "Measurements")) +
geom_path(aes(x = trueValX, y = trueValY, color = "True values")) +
geom_point(aes(x = trueValX, y = trueValY, color = "True values", shape = "True values")) +
geom_path(aes(x = estimX, y = estimY, color = "Estimates")) +
geom_point(aes(x = estimX, y = estimY, color = "Estimates", shape = "Estimates")) +
geom_ellipse(aes(x0 = muValX, y0 = muValY, a = kaVal, b = kbVal, angle = phiVal, color = "Interval"), fill = 'yellow', alpha = 0.2) +
coord_fixed() +
labs(title = "Vehicle Position",
x = "X(m)",
y = "Y(m)",
color = "") +
scale_color_manual(values = c("Estimates" = "red", "Measurements" = "blue", "True values" = "green", "Interval" = 'yellow'), name = "") +
scale_shape_manual(values = c("Estimates" = 19, "Measurements" = 15, "True values" = 18, 'Interval' = 1), name = "")
XZ axis for position
# Confidence ellipse for the probability of 95%:
scaleFactor <- sqrt(-2*log(1 - 0.95))
muValX <- c()
muValY <- c()
phiVal <- c()
kaVal <- c()
kbVal <- c()
for (i in seq(1, 40, by = 2)){
# Get the covariance matrix for each estimation
sigma2 <- rbind(c(p[[i]][1,1], 0),
c(0, p[[i]][3,3]))
# Get the eigen values and eigen vectors
res <- eigen(sigma2)
# θ = arctan(vy/vx)
phiVal <- append(phiVal, list(atan(res$vectors[2,1] / res$vectors[1,1])))
# k * the highest eigenvalue square root
kaVal <- append(kaVal, list(scaleFactor * sqrt(res$values[1])))
# k * the second eigenvalue square root
kbVal <- append(kbVal, list(scaleFactor * sqrt(res$values[2])))
# Centre of the ellipse, each estimation in the x and z axis
muValX <- append(muValX, list(c(x[[i]][1])))
muValY <- append(muValY, list(c(x[[i]][3])))
}
# Dataframe with all the data
data2 <- data.frame(
estimX = unlist(estimatesX),
estimY = unlist(estimatesZ),
measurements.x = hn[,1],
measurements.y = hn[,3],
trueValX = unlist(truePosX),
trueValY = unlist(truePosZ),
muValX = unlist(muValX),
muValY = unlist(muValY),
phiVal = unlist(phiVal),
kaVal = unlist(kaVal),
kbVal = unlist(kbVal)
)
# Plot to compare the true value, measured values and estimates
ggplot(data2) +
geom_path(aes(x = measurements.x, y = measurements.y, color = "Measurements")) +
geom_point(aes(x = measurements.x, y = measurements.y, color = "Measurements", shape = "Measurements")) +
geom_path(aes(x = trueValX, y = trueValY, color = "True values")) +
geom_point(aes(x = trueValX, y = trueValY, color = "True values", shape = "True values")) +
geom_path(aes(x = estimX, y = estimY, color = "Estimates")) +
geom_point(aes(x = estimX, y = estimY, color = "Estimates", shape = "Estimates")) +
geom_ellipse(aes(x0 = muValX, y0 = muValY, a = kaVal, b = kbVal, angle = phiVal, color = "Interval"), fill = 'yellow', alpha = 0.2) +
coord_fixed() +
labs(title = "Vehicle Position",
x = "X(m)",
y = "Z(m)",
color = "") +
scale_color_manual(values = c("Estimates" = "red", "Measurements" = "blue", "True values" = "green", "Interval" = 'yellow'), name = "") +
scale_shape_manual(values = c("Estimates" = 19, "Measurements" = 15, "True values" = 18, 'Interval' = 1), name = "")
YZ axis for position
# Confidence ellipse for the probability of 95%:
scaleFactor <- sqrt(-2*log(1 - 0.95))
muValX <- c()
muValY <- c()
phiVal <- c()
kaVal <- c()
kbVal <- c()
for (i in seq(1, 40, by = 2)){
# Get the covariance matrix for each estimation
sigma2 <- rbind(c(p[[i]][2,2], 0),
c(0, p[[i]][3,3]))
# Get the eigen values and eigen vectors
res <- eigen(sigma2)
# θ = arctan(vy/vx)
phiVal <- append(phiVal, list(atan(res$vectors[2,1] / res$vectors[1,1])))
# k * the highest eigenvalue square root
kaVal <- append(kaVal, list(scaleFactor * sqrt(res$values[1])))
# k * the second eigenvalue square root
kbVal <- append(kbVal, list(scaleFactor * sqrt(res$values[2])))
# Centre of the ellipse, each estimation in the y and z axis
muValX <- append(muValX, list(c(x[[i]][2])))
muValY <- append(muValY, list(c(x[[i]][3])))
}
# Dataframe with all the data
data2 <- data.frame(
estimX = unlist(estimatesY),
estimY = unlist(estimatesZ),
measurements.x = hn[,2],
measurements.y = hn[,3],
trueValX = unlist(truePosY),
trueValY = unlist(truePosZ),
muValX = unlist(muValX),
muValY = unlist(muValY),
phiVal = unlist(phiVal),
kaVal = unlist(kaVal),
kbVal = unlist(kbVal)
)
# Plot to compare the true value, measured values and estimates
ggplot(data2) +
geom_path(aes(x = measurements.x, y = measurements.y, color = "Measurements")) +
geom_point(aes(x = measurements.x, y = measurements.y, color = "Measurements", shape = "Measurements")) +
geom_path(aes(x = trueValX, y = trueValY, color = "True values")) +
geom_point(aes(x = trueValX, y = trueValY, color = "True values", shape = "True values")) +
geom_path(aes(x = estimX, y = estimY, color = "Estimates")) +
geom_point(aes(x = estimX, y = estimY, color = "Estimates", shape = "Estimates")) +
geom_ellipse(aes(x0 = muValX, y0 = muValY, a = kaVal, b = kbVal, angle = phiVal, color = "Interval"), fill = 'yellow', alpha = 0.2) +
coord_fixed() +
labs(title = "Vehicle Position",
x = "Y(m)",
y = "Z(m)",
color = "") +
scale_color_manual(values = c("Estimates" = "red", "Measurements" = "blue", "True values" = "green", "Interval" = 'yellow'), name = "") +
scale_shape_manual(values = c("Estimates" = 19, "Measurements" = 15, "True values" = 18, 'Interval' = 1), name = "")
Therefore, we can observe in these three plots that, even though the position measurements are very noisy, the Kalman Filter accurately estimates the position of the vehicle in the x, y and z axes.
In conclusion, the Kalman Filter is a versatile and efficient tool for state estimation in the presence of noisy measurements. Its recursive nature, adaptability to dynamic systems, and ability to handle noise make it a widely adopted method in various fields where accurate state estimation is crucial.